5 research outputs found

    Analysis of an air-spaced patch antenna near 1800 MHz

    Microstrip antennas are a type of printed antenna consisting of a patch on top of a grounded substrate. A major limitation on the performance of the patch antenna is the dielectric substrate. The idea of using air as the dielectric was therefore considered to overcome that limitation, because air has the lowest permittivity and no loss. The goal of this work is to build an air-spaced patch antenna with a minimum resonant frequency of 1800 MHz and a return loss of at least 10 dB. This work is novel because the air-spaced patch antenna has not been extensively studied. Existing literature on patch antennas with dielectric substrates was used to design the antenna (dimensions of the patch, ground plane, and height) and to understand the principles of operation of microstrip patch antennas in general. Simulations using the Numerical Electromagnetic Code (NEC) and experiments in the RF laboratory were used for this air-spaced patch antenna study. The antenna was simulated to find a trend for the variation of the return loss and impedance with the resonant frequency; simulation also helped rule out cases that would not be meaningful to explore in the experiment. The experiment was done in the RF laboratory of the Marquette University College of Engineering. Two procedures, drawn from two different sources ([2], [3]), were used to calculate the patch dimensions, leading to two patch antennas that were tested. For each antenna, the height of the dielectric substrate and the recess feed distance were varied. Antenna 2 (procedure 2, [3]) provided the best results, with a resonant frequency of 1800 MHz and a return loss of 21 dB. The error between the experimental and simulated resonant frequencies is generally 5% or less; it increases as the dielectric height and the recess distance increase. Simulation results roughly follow the trend of the experimental results.
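    The abstract does not reproduce the two design procedures, but the standard transmission-line-model equations for a rectangular patch (the textbook forms commonly used for first-pass patch design) convey the idea. The sketch below applies them with a relative permittivity of 1 for the air dielectric; the 5 mm air-gap height is a hypothetical value, not one reported in the study.

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def patch_dimensions(f0_hz, h_m, eps_r=1.0):
    """Transmission-line-model patch design equations (standard textbook
    forms); eps_r = 1.0 models the air dielectric used in this study."""
    # Patch width for efficient radiation
    w = C / (2.0 * f0_hz) * math.sqrt(2.0 / (eps_r + 1.0))
    # Effective permittivity (exactly 1 for an air dielectric)
    eps_eff = (eps_r + 1.0) / 2.0 + (eps_r - 1.0) / 2.0 / math.sqrt(1.0 + 12.0 * h_m / w)
    # Fringing-field length extension at each radiating edge
    dl = 0.412 * h_m * ((eps_eff + 0.3) * (w / h_m + 0.264)
                        / ((eps_eff - 0.258) * (w / h_m + 0.8)))
    # Physical patch length for resonance at f0
    l = C / (2.0 * f0_hz * math.sqrt(eps_eff)) - 2.0 * dl
    return w, l

w, l = patch_dimensions(1.8e9, h_m=5e-3)  # 5 mm air gap is a hypothetical value
print(f"W = {w * 1e3:.1f} mm, L = {l * 1e3:.1f} mm")
```

    With a relative permittivity of 1, the effective permittivity is exactly 1, so the patch length approaches a half free-space wavelength (about 83 mm at 1800 MHz) minus the fringing-field correction.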

    Data Cleaning in the Energy Domain

    This dissertation addresses the problem of data cleaning in the energy domain, especially for natural gas and electric time series. The detection and imputation of anomalies improve the performance of the forecasting models utilities need to lower purchasing and storage costs and to plan for peak energy loads or distribution shortages. There are various types of anomalies, each induced by diverse causes and sources depending on the field of study; the definition of a false positive also depends on the context. The analysis is focused on energy data because of the availability of data and information to make a theoretical and practical contribution to the field. A probabilistic approach based on hypothesis testing is developed to decide whether a data point is anomalous at a given level of significance. Furthermore, the probabilistic approach is combined with statistical regression models to handle time series data. Domain knowledge of energy data and a survey of the causes and sources of anomalies in energy are incorporated into the data cleaning algorithm to improve the accuracy of the results. The data cleaning method is evaluated on simulated data sets in which anomalies were artificially inserted, and on natural gas and electric data sets. In the simulation study, the performance of the method is evaluated for both detection and imputation on all identified causes of anomalies in energy data. The testing on utilities' data evaluates the percentage of improvement that data cleaning brings to forecasting accuracy. A cross-validation study of the results is also performed to demonstrate the performance of the data cleaning algorithm on smaller data sets and to calculate a confidence interval for the results. The data cleaning algorithm is able to successfully identify anomalies in energy time series, and the replacement of those anomalies improves forecasting model accuracy. The process is automatic, which is important because many data cleaning processes require human input and become impractical for very large data sets. The techniques are also applicable to other fields such as econometrics and finance, provided the exogenous factors of the time series data are well defined.
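    As a minimal sketch of the core decision rule, the fragment below fits a linear regression, tests each standardized residual at a chosen significance level under a normal assumption, and imputes flagged points with the model's prediction. The feature matrix, significance level, and normality assumption are illustrative placeholders, not the dissertation's actual design.

```python
import numpy as np
from scipy import stats

def detect_and_impute(y, X, alpha=0.01):
    """Fit a linear regression, flag points whose standardized residuals
    are significant at level alpha (two-sided, normal assumption), and
    impute the flagged points with the model's predictions."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    z = (resid - resid.mean()) / resid.std(ddof=1)
    p_values = 2.0 * stats.norm.sf(np.abs(z))
    anomalous = p_values < alpha
    y_clean = np.where(anomalous, fitted, y)
    return y_clean, anomalous

# Toy usage: a linear trend with two injected spikes
rng = np.random.default_rng(0)
n = 200
X = np.c_[np.ones(n), np.arange(n)]
y = X @ np.array([5.0, 0.3]) + rng.normal(0, 1, n)
y[[50, 120]] += 15.0
y_clean, flags = detect_and_impute(y, X, alpha=0.001)
print(np.flatnonzero(flags))  # should include indices 50 and 120
```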

    Probabilistic Anomaly Detection in Natural Gas Time Series Data

    This paper introduces a probabilistic approach to anomaly detection, specifically in natural gas time series data. In the natural gas field, there are various types of anomalies, each of which is induced by a range of causes and sources. The causes of a set of anomalies are examined and categorized, and a Bayesian maximum likelihood classifier learns the temporal structures of known anomalies. Given previously unseen time series data, the system detects anomalies using a linear regression model with weather inputs, after which the anomalies are tested for false positives and classified by the Bayesian classifier. The method can also identify anomalies of unknown origin; thus, the likelihood of a data point being anomalous is given for anomalies of both known and unknown origins. The probabilistic anomaly detection method is tested on a reported natural gas consumption data set.
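    A minimal sketch of the classification stage is given below, assuming one Gaussian model per known anomaly class over some chosen window features; the feature representation and the likelihood floor used to declare an anomaly of unknown origin are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy.stats import multivariate_normal

class MLAnomalyClassifier:
    """One Gaussian per known anomaly class over window features; a window
    whose best class likelihood falls below a floor is labeled 'unknown'."""

    def __init__(self, likelihood_floor=1e-6):
        self.likelihood_floor = likelihood_floor
        self.models = {}

    def fit(self, features_by_class):
        # features_by_class: {class label: (n_examples, n_features) array}
        for label, feats in features_by_class.items():
            feats = np.asarray(feats, float)
            # Small ridge term keeps the covariance invertible
            cov = np.cov(feats, rowvar=False) + 1e-9 * np.eye(feats.shape[1])
            self.models[label] = multivariate_normal(feats.mean(axis=0), cov)

    def classify(self, x):
        likelihoods = {lbl: m.pdf(x) for lbl, m in self.models.items()}
        best = max(likelihoods, key=likelihoods.get)
        return best if likelihoods[best] >= self.likelihood_floor else "unknown"
```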

    Data Improving in Time Series Using ARX and ANN Models

    Anomalous data can negatively impact energy forecasting by causing model parameters to be incorrectly estimated. This paper presents two approaches for the detection and imputation of anomalies in time series data. Autoregressive with exogenous inputs (ARX) and artificial neural network (ANN) models are used to extract the characteristics of the time series. Anomalies are detected by performing hypothesis testing on the extrema of the residuals, and the anomalous data points are imputed using the ARX and ANN models. Because the anomalies affect the model coefficients, the data cleaning process is performed iteratively: the models are re-learned on “cleaner” data after an anomaly is imputed, and the anomalous data are re-imputed at each iteration using the updated ARX and ANN models. The ARX and ANN data cleaning models are evaluated on natural gas time series data, and the paper demonstrates that the proposed approaches are able to identify and impute anomalous data points. Forecasting models learned on the unclean data and on the cleaned data are tested on an uncleaned out-of-sample data set. The forecasting model learned on the cleaned data outperforms the model learned on the unclean data, with a 1.67% improvement in mean absolute percentage error and a 32.8% improvement in root mean squared error. Remaining challenges include correctly identifying specific types of anomalies, such as negative flows.
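    Setting the ANN variant aside, the ARX half of the iterative detect-and-impute loop can be sketched as below. The model orders, the normal assumption, and the Bonferroni-style test of the single largest residual are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np
from scipy import stats

def arx_design(y, u, na=2, nb=2):
    """Regression matrix for ARX(na, nb): y[t] depends on the na previous
    outputs and the nb previous exogenous inputs."""
    n0 = max(na, nb)
    rows = [np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]] for t in range(n0, len(y))]
    return np.asarray(rows), y[n0:], n0

def iterative_arx_clean(y, u, na=2, nb=2, alpha=0.01, max_iter=50):
    y = np.asarray(y, float).copy()
    u = np.asarray(u, float)
    for _ in range(max_iter):
        X, target, n0 = arx_design(y, u, na, nb)
        theta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ theta
        k = int(np.argmax(np.abs(resid)))          # extremum of the residuals
        z = (resid[k] - resid.mean()) / resid.std(ddof=1)
        # Bonferroni-style test of the single largest residual
        if 2.0 * stats.norm.sf(abs(z)) * len(resid) >= alpha:
            break                                  # nothing significant remains
        y[k + n0] = (X @ theta)[k]                 # impute, then re-learn
    return y
```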

    Time Series Outlier Detection and Imputation

    This paper proposes the combination of two statistical techniques for the detection and imputation of outliers in time series data. An autoregressive integrated moving average with exogenous inputs (ARIMAX) model is used to extract the characteristics of the time series and to find the residuals. Outliers are detected by performing hypothesis testing on the extrema of the residuals, and the anomalous data are imputed using another ARIMAX model. The process is performed iteratively because, at the beginning of the process, the residuals are contaminated by the anomalies; therefore, the ARIMAX model needs to be re-learned on “cleaner” data at every step. We test the algorithm on both synthetic and real data sets and present analysis of and comments on the results.
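    A compact sketch of the iterative loop using statsmodels' SARIMAX class (which fits an ARIMAX model when exogenous regressors are supplied) is shown below; the fixed z-score cutoff stands in for the paper's hypothesis test, and the model order is an arbitrary placeholder.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def arimax_clean(y, exog, order=(1, 1, 1), z_crit=4.0, max_iter=20):
    """Fit an ARIMAX model, test the largest standardized residual against
    a fixed cutoff, impute it with the fitted value, and re-learn."""
    y = np.asarray(y, float).copy()
    for _ in range(max_iter):
        res = SARIMAX(y, exog=exog, order=order).fit(disp=False)
        z = (res.resid - res.resid.mean()) / res.resid.std()
        z[: sum(order)] = 0.0            # ignore start-up transients
        k = int(np.argmax(np.abs(z)))
        if abs(z[k]) < z_crit:           # extremum no longer significant
            break
        y[k] = res.fittedvalues[k]       # impute, then re-learn on cleaner data
    return y
```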